Image Formation


Reviews: Visual Object Networks: Image Generation with Disentangled 3D Representations

Neural Information Processing Systems

This paper describes a generative model for image formation with disentangled latent parameters for shape, viewpoint, and texture. This follows the view of vision as an inverse graphics problem, where image generation is formulated as a search in model parameter space for parameters that, when rendered, reproduce the given image; the difference between the rendered image and the original image is used to train the model. Using inverse graphics as inspiration, the paper learns the following modules:

1. A voxel generator that maps a latent 3D shape code to a voxelized 3D shape.
2. A differentiable projection module that converts the output of (1) into a 2.5D sketch (a depth map) and a silhouette mask, conditioned on a latent representation of the desired viewpoint.
3. A texture generator that maps the output of (2) to a realistic textured image, conditioned on a latent texture code.
4. A 2.5D sketch encoder that maps a 2D image to a 2.5D sketch.
5. A texture encoder that maps the texture of the object in a 2D image to a latent texture code.

The modules are trained adversarially with GANs, and, thanks to the now-common cycle-consistency constraints, without the need for paired image and shape data.
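Concretely, the generation path (modules 1 through 3) can be pictured as a single forward pass from latent codes to a rendered image. Below is a minimal PyTorch-style sketch of that pipeline; the module names, layer choices, grid sizes, and the toy ray-marching projection are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the VON forward pipeline (module names,
# layer choices, and tensor shapes are assumptions, not the paper's code).
import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    """Maps a latent shape code to a coarse voxel occupancy grid."""
    def __init__(self, z_dim=128, grid=32):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, grid ** 3), nn.Sigmoid(),  # occupancy in [0, 1]
        )
    def forward(self, z_shape):
        v = self.net(z_shape)
        return v.view(-1, 1, self.grid, self.grid, self.grid)

def project(voxels, view):
    """Toy differentiable projection: march rays along the depth axis to
    get a silhouette and an expected-depth map (the 2.5D sketch)."""
    # A real implementation first rotates the grid by `view`; skipped here.
    occ = voxels.squeeze(1)                      # (B, D, H, W)
    ray_free = torch.cumprod(1 - occ, dim=1)     # prob. ray is still unblocked
    hit = occ * torch.cat([torch.ones_like(occ[:, :1]), ray_free[:, :-1]], 1)
    silhouette = hit.sum(dim=1)                  # (B, H, W)
    depth_idx = torch.arange(occ.shape[1], dtype=occ.dtype).view(1, -1, 1, 1)
    depth = (hit * depth_idx).sum(dim=1)         # expected depth of first hit
    return depth, silhouette

class TextureGenerator(nn.Module):
    """Maps the 2.5D sketch plus a texture code to an RGB image."""
    def __init__(self, z_dim=8):
        super().__init__()
        self.net = nn.Conv2d(2 + z_dim, 3, kernel_size=3, padding=1)
    def forward(self, depth, silhouette, z_texture):
        b, h, w = depth.shape
        z = z_texture.view(b, -1, 1, 1).expand(b, z_texture.shape[1], h, w)
        x = torch.cat([depth.unsqueeze(1), silhouette.unsqueeze(1), z], dim=1)
        return torch.tanh(self.net(x))

# Forward pass: shape code -> voxels -> 2.5D sketch -> textured image.
G_shape, G_texture = VoxelGenerator(), TextureGenerator()
z_shape, z_texture = torch.randn(1, 128), torch.randn(1, 8)
voxels = G_shape(z_shape)
depth, sil = project(voxels, view=None)
image = G_texture(depth, sil, z_texture)  # (1, 3, 32, 32)
```

In the paper, the adversarial losses and cycle-consistency constraints attach at the image and sketch levels of this pipeline, which is what removes the need for paired supervision.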


WaterNeRF: Neural Radiance Fields for Underwater Scenes

Sethuraman, Advaith Venkatramanan, Ramanagopal, Manikandasriram Srinivasan, Skinner, Katherine A.

arXiv.org Artificial Intelligence

Underwater imaging is a critical task performed by marine robots for a wide range of applications including aquaculture, marine infrastructure inspection, and environmental monitoring. However, water column effects, such as attenuation and backscattering, drastically change the color and quality of imagery captured underwater. Due to varying water conditions and the range dependency of these effects, restoring underwater imagery is a challenging problem. This impacts downstream perception tasks including depth estimation and 3D reconstruction. In this paper, we advance the state of the art in neural radiance fields (NeRFs) to enable physics-informed dense depth estimation and color correction. Our proposed method, WaterNeRF, estimates the parameters of a physics-based model for underwater image formation, leading to a hybrid data-driven and model-based solution. After determining the scene structure and radiance field, we can produce novel views of degraded as well as corrected underwater images, along with dense depth of the scene. We evaluate the proposed method qualitatively and quantitatively on a real underwater dataset.
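The abstract does not spell out the physics-based model, but a commonly used underwater image formation model expresses the observed color in each channel as an attenuated direct signal plus range-dependent backscatter. The NumPy sketch below is an illustrative version of such a model, not WaterNeRF's exact parameterization; the function names and coefficient values are assumptions.

```python
# Sketch of a standard underwater image formation model (per color channel c):
#   I_c(z) = J_c * exp(-beta_D_c * z) + B_inf_c * (1 - exp(-beta_B_c * z))
# where J is the unattenuated scene color, z is range, beta_D / beta_B are
# attenuation and backscatter coefficients, and B_inf is the veiling light
# color at infinite range. All parameter values below are illustrative.
import numpy as np

def underwater_image(J, z, beta_D, beta_B, B_inf):
    """Degrade a clean image J (H, W, 3) given per-pixel range z (H, W)."""
    z = z[..., None]                         # broadcast range over channels
    direct = J * np.exp(-beta_D * z)         # range-dependent attenuation
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))
    return direct + backscatter

def restore_image(I, z, beta_D, beta_B, B_inf):
    """Invert the model: recover J from an observed image and known range."""
    z = z[..., None]
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))
    return (I - backscatter) * np.exp(beta_D * z)

# Illustrative coefficients (red attenuates fastest in water).
beta_D = np.array([0.40, 0.12, 0.08])   # per-channel attenuation (1/m)
beta_B = np.array([0.25, 0.20, 0.15])   # per-channel backscatter (1/m)
B_inf  = np.array([0.05, 0.30, 0.40])   # bluish-green veiling light
J = np.random.rand(64, 64, 3)           # stand-in clean image
z = np.full((64, 64), 5.0)              # 5 m range everywhere
I = underwater_image(J, z, beta_D, beta_B, B_inf)
J_hat = restore_image(I, z, beta_D, beta_B, B_inf)
assert np.allclose(J, J_hat)            # restoration inverts the model
```

The restoration step shows why dense depth matters: inverting the model at every pixel requires the per-pixel range z, which is exactly what the NeRF scene structure supplies.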


Differentiable optimization of the Debye-Wolf integral for light shaping and adaptive optics in two-photon microscopy

Vishniakou, Ivan, Seelig, Johannes D.

arXiv.org Artificial Intelligence

Control of light through high numerical aperture (N.A.) objectives is a common requirement in microscopy, for example for engineering specific point spread functions for super-resolution imaging [1, 2], for generating target light distributions for optical stimulation [3, 4, 5], for optical tweezers [6, 7], or for aberration corrections in adaptive optics [8, 9, 10]. In all these situations, computational modeling is the most versatile approach for finding a phase pattern that, when displayed on a spatial light modulator (SLM), results in the desired target light distribution in the focal volume. Light propagation through a high-N.A. microscope objective can be described accurately with the vectorial Debye-Wolf diffraction integral [11]. The Debye-Wolf integral takes into account the orientation of the electromagnetic field vector (polarization), which contributes to the shape of the focus in high-N.A. objectives. Such effects can be exploited for high-resolution imaging, for example of diffraction-limited objects such as single molecules or nanostructures [1, 12, 2]. However, inversion of the Debye-Wolf integral has no general closed-form solution [13], so one typically resorts to numerical approaches when the aim is to generate an intended target light distribution. A fast method for calculating the Debye-Wolf integral would therefore be useful across a range of applications, for example vectorial imaging [14], vectorial beam shaping for tight focusing [2], or super-resolution computational imaging [15], as well as light shaping or imaging at lower resolution, where polarization effects have less of an impact.
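To make the "differentiable optimization" idea concrete: since the forward model (pupil phase to focal field) is differentiable, one can search for the SLM phase by gradient descent on a focal-plane loss. The sketch below substitutes a simple scalar far-field (FFT) propagation for the vectorial Debye-Wolf integral, so it illustrates only the optimization pattern, not the paper's actual model; the variable names and target pattern are assumptions.

```python
# Minimal gradient-based phase-shaping sketch. The paper differentiates
# through the vectorial Debye-Wolf integral; here a scalar far-field (FFT)
# propagation stands in for it so the optimization loop stays short.
import torch

N = 128
phase = torch.zeros(N, N, requires_grad=True)   # SLM phase pattern to optimize
pupil = torch.ones(N, N)                        # uniform illumination amplitude

# Illustrative target focal intensity: a small off-axis spot.
target = torch.zeros(N, N)
target[70:74, 60:64] = 1.0
target = target / target.sum()

opt = torch.optim.Adam([phase], lr=0.05)
for step in range(500):
    field = pupil * torch.exp(1j * phase)              # complex pupil field
    focal = torch.fft.fftshift(torch.fft.fft2(field))  # scalar focal field
    intensity = focal.abs() ** 2
    intensity = intensity / intensity.sum()            # normalize total power
    loss = ((intensity - target) ** 2).sum()
    opt.zero_grad()
    loss.backward()           # autograd carries gradients back to the phase
    opt.step()
```

Swapping the FFT for a differentiable implementation of the full vectorial integral keeps the same loop but makes the gradients respect polarization, which is the point of the paper.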


Check Out The Top 7 Resources To Learn Computer Vision

#artificialintelligence

Computer vision, an interdisciplinary field spanning artificial intelligence and computer science, is concerned with turning data from a still or video camera into an accurate representation of the scene. Much like human vision, computer vision aims to enable computers to visualise, recognise, and process images. One of the most active fields within artificial intelligence, computer vision has found plenty of use cases in industry, and there are many resources available for getting up to speed with it. In this article, we list seven of the best free resources that will come in handy when learning computer vision. The list is in no particular order.


The Robot Academy: Lessons in image formation and 3D vision

Robohub

The Robot Academy is a new learning resource from Professor Peter Corke and the Queensland University of Technology (QUT), the team behind the award-winning Introduction to Robotics and Robotic Vision courses. There are over 200 lessons available, all for free. The lessons were created in 2015 for the Introduction to Robotics and Robotic Vision courses. We describe our approach to creating the original courses in the article, An Innovative Educational Change: Massive Open Online Courses in Robotics and Robotic Vision. The courses were designed for university undergraduate students, but many lessons are suitable for anybody; each lesson carries a difficulty rating, so you can easily judge which ones fit your background.